Deeply embedded systems powered by microcontrollers are becoming popular with the emergence of Internet-of-Things (IoT) technology. However, these devices primarily run C/C++ code and are susceptible to memory bugs, which can enable both control-data and non-control-data attacks. Existing defense mechanisms, such as control-flow integrity (CFI), data-flow integrity (DFI), and write integrity testing (WIT), consume substantial resources, making them impractical in real products. To keep the defense lightweight, we design a bitmap-based allowlist mechanism that unifies the storage of the runtime data used to protect both control data and non-control data. The memory requirement is constant and small, regardless of the number of deployed defense mechanisms. We store the allowlist in TrustZone to ensure its integrity and confidentiality, and we perform an offline analysis to detect potential collisions and adjust for them when they occur. We have implemented our idea on an ARM Cortex-M-based development board. Our evaluation shows a substantial reduction in memory consumption when deploying the proposed CFI and DFI mechanisms, without compromising runtime performance: our prototype enforces CFI and DFI at an average cost of just 2.09% performance overhead and 32.56% memory overhead.
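To make the bitmap-allowlist idea concrete, here is a minimal Python sketch of how a fixed-size bitmap can serve as a unified allowlist for both control transfers (CFI) and data writes (DFI). The hash function, bitmap size, and example addresses below are illustrative assumptions, not the paper's actual design; on the device, the bitmap would reside in TrustZone-protected memory, and the hash would be chosen offline so collisions can be detected and adjusted for.

```python
BITMAP_BITS = 1 << 16          # fixed-size bitmap: memory cost is constant
bitmap = bytearray(BITMAP_BITS // 8)

def bit_index(site: int, target: int) -> int:
    # Toy hash over a (call site, branch/write target) pair; the real
    # scheme would be selected by the offline collision analysis.
    return ((site * 0x9E3779B1) ^ target) % BITMAP_BITS

def allow(site: int, target: int) -> None:
    """Offline phase: record a legitimate control transfer or data write."""
    i = bit_index(site, target)
    bitmap[i // 8] |= 1 << (i % 8)

def check(site: int, target: int) -> bool:
    """Runtime phase: a set bit means the (site, target) pair is allowed."""
    i = bit_index(site, target)
    return bool(bitmap[i // 8] & (1 << (i % 8)))

# Example: register one valid indirect-call edge, then test two targets.
allow(0x0800_1234, 0x0800_2000)
assert check(0x0800_1234, 0x0800_2000)      # legitimate edge passes
assert not check(0x0800_1234, 0x0800_3000)  # unlisted edge is rejected
```

Because the bitmap has a fixed footprint, adding more protected sites never grows the data structure, which matches the abstract's claim of constant, small memory requirements.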
This paper presents a design approach for modeling and simulating ultra-low-power (ULP) analog computing machine learning (ML) circuits for seizure detection from EEG signals in wearable health-monitoring applications. We describe a new analog system modeling and simulation technique that associates power consumption, noise, linearity, and other critical performance parameters of analog circuits with the classification accuracy of a given ML network, making it possible to realize a power- and performance-optimized analog ML hardware implementation tailored to application-specific needs. We carried out circuit simulations to obtain the non-idealities, which are then mathematically modeled for an accurate mapping. We modeled noise, nonlinearity, resolution, and process variations so that the model can accurately predict the classification accuracy of the analog-computing-based seizure detection system. Noise is modeled as input-referred white noise added directly at the input. Device process and temperature variations are modeled as random fluctuations in circuit parameters such as gain and cut-off frequency. Nonlinearity is modeled as a power series. The combined system-level model is then simulated to assess classification accuracy. The design approach helps optimize power and area during the development of tailored analog circuits for ML networks, with the ability to trade power against performance while still meeting the required classification accuracy. The simulation technique also enables designers to determine target specifications for each circuit block in the analog computing hardware, by developing the ML hardware model and investigating the effect of circuit non-idealities on classification accuracy. Simulation of an analog-computing EEG seizure detection block shows a classification accuracy of 91%. The proposed modeling approach significantly reduces the design time and complexity of large analog computing systems. Two feature extraction approaches are also compared for an analog computing architecture.
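The mapping from circuit non-idealities to a system-level model can be sketched numerically. The following NumPy fragment is a minimal sketch, assuming invented parameter values: it adds input-referred white noise, applies a power-series nonlinearity, quantizes to a finite resolution, and draws a random gain to stand in for process/temperature variation. In the actual flow, these values would come from circuit simulation of the real ULP blocks, and the degraded features would be fed to the ML classifier to obtain accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_frontend(x, gain=1.0, noise_std=0.01, a2=0.02, a3=0.05, n_bits=8):
    """Sketch of one analog block with the non-idealities described above.

    All parameter values are illustrative placeholders.
    """
    x = x + rng.normal(0.0, noise_std, x.shape)   # input-referred white noise
    y = gain * x + a2 * x**2 + a3 * x**3          # nonlinearity as a power series
    levels = 2 ** n_bits                          # finite resolution (quantization)
    y = np.round(np.clip(y, -1, 1) * (levels / 2)) / (levels / 2)
    return y

def with_process_variation(gain_nominal=1.0, sigma=0.05):
    # Process/temperature variation as a random fluctuation in block gain.
    return gain_nominal * (1.0 + rng.normal(0.0, sigma))

# Monte Carlo over random gain draws: pass features through the model and
# measure distortion; in the full flow this feeds the classifier instead.
features = rng.uniform(-1, 1, size=(1000, 16))
for trial in range(3):
    g = with_process_variation()
    degraded = analog_frontend(features, gain=g)
    rms = np.sqrt(np.mean((degraded - features) ** 2))
    print(f"trial {trial}: gain={g:.3f}, rms error={rms:.4f}")
```

Sweeping noise_std, the power-series coefficients, or n_bits while monitoring classification accuracy is one way such a model can yield per-block target specifications.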
The benefit of integrating batches of genomic data to increase statistical power is often hindered by batch effects: unwanted variation in the data caused by differences in technical factors across batches. It is therefore critical to effectively address batch effects in genomic data. Many existing methods for batch-effect adjustment assume the data follow a continuous, bell-shaped Gaussian distribution. In RNA-seq studies, however, the data are typically skewed, over-dispersed counts, so this assumption is inappropriate and may lead to erroneous results. Negative binomial regression models have been used previously to better capture the properties of counts. We developed a batch correction method, ComBat-seq, using a negative binomial regression model that retains the integer nature of count data in RNA-seq studies, making the batch-adjusted data compatible with common differential expression software packages that require integer counts. We show in realistic simulations that ComBat-seq-adjusted data yield better statistical power and control of false positives in differential expression than data adjusted by the other available methods. We further demonstrate on a real data example that ComBat-seq successfully removes batch effects and recovers the biological signal in the data.
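As a rough illustration of the quantile-matching idea behind a negative-binomial batch correction, the Python sketch below maps a count observed in one batch onto a batch-free target distribution while keeping it an integer. This is a simplification under assumed means and dispersions: the function names and parameter values are hypothetical, and ComBat-seq's actual regression machinery for estimating batch effects per gene is omitted.

```python
import numpy as np
from scipy.stats import nbinom

def nb_params(mu, alpha):
    """Convert mean/dispersion (var = mu + alpha * mu**2) to scipy's (n, p)."""
    n = 1.0 / alpha
    return n, n / (n + mu)

def adjust_count(y, mu_batch, alpha_batch, mu_target, alpha_target):
    """Map an observed count from its batch's NB distribution onto the
    batch-free target NB distribution by quantile matching, so the
    adjusted value remains a non-negative integer."""
    n_b, p_b = nb_params(mu_batch, alpha_batch)
    n_t, p_t = nb_params(mu_target, alpha_target)
    q = nbinom.cdf(y, n_b, p_b)          # quantile of y within its batch
    return int(nbinom.ppf(q, n_t, p_t))  # same quantile in the target

# Toy example: a gene whose batch inflates counts (batch mean 20 vs target 10).
print(adjust_count(30, mu_batch=20, alpha_batch=0.1,
                   mu_target=10, alpha_target=0.1))
```

Because the output of the quantile mapping is itself a count, the adjusted data can be passed directly to differential expression tools that require integer inputs, which is the compatibility property the abstract emphasizes.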
